
    Robust causal structure learning with some hidden variables

    We introduce a new method to estimate the Markov equivalence class of a directed acyclic graph (DAG) in the presence of hidden variables, in settings where the underlying DAG among the observed variables is sparse, and there are a few hidden variables that have a direct effect on many of the observed ones. Building on the so-called low rank plus sparse framework, we suggest a two-stage approach which first removes the effect of the hidden variables, and then estimates the Markov equivalence class of the underlying DAG under the assumption that there are no remaining hidden variables. This approach is consistent in certain high-dimensional regimes and performs favourably when compared to the state of the art, both in terms of graphical structure recovery and total causal effect estimation.
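
    The two-stage idea can be sketched numerically. The following is a minimal illustration under assumed choices (a simulated setting with a few dense hidden confounders and independent noise), not the paper's estimator: stage one removes the top principal components, which absorb the low-rank hidden-variable effects, and stage two would then run a structure learner on the residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setting: h hidden variables, each directly affecting many of
# the p observed variables; all names and dimensions here are illustrative.
n, p, h = 2000, 10, 2
H = rng.normal(size=(n, h))           # hidden confounders
B = rng.normal(size=(h, p))           # dense loadings: hidden -> observed
X = H @ B + rng.normal(size=(n, p))   # observed, confounded data

# Stage 1: the dense hidden effects make the covariance approximately
# "low rank plus sparse"; removing the top-h principal components
# absorbs the low-rank (hidden) part.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
residual = Xc - (U[:, :h] * s[:h]) @ Vt[:h]

# Stage 2 would estimate the Markov equivalence class from `residual`
# with a standard method (e.g. the PC algorithm), assuming no remaining
# hidden variables. Here we only check that deconfounding helped: the
# residual covariance is much closer to the true (identity) structure.
dev_before = np.abs(np.cov(Xc, rowvar=False) - np.eye(p)).max()
dev_after = np.abs(np.cov(residual, rowvar=False) - np.eye(p)).max()
print(dev_before, dev_after)
```

    In this toy setup the number of hidden variables h is taken as known; choosing the rank in practice is part of the estimation problem.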

    Quantifying identifiability in independent component analysis

    We are interested in consistent estimation of the mixing matrix in the ICA model, when the error distribution is close to (but different from) Gaussian. In particular, we consider $n$ independent samples from the ICA model $X = A\epsilon$, where we assume that the coordinates of $\epsilon$ are independent and identically distributed according to a contaminated Gaussian distribution, and the amount of contamination is allowed to depend on $n$. We then investigate how the ability to consistently estimate the mixing matrix depends on the amount of contamination. Our results suggest that in an asymptotic sense, if the amount of contamination decreases at rate $1/\sqrt{n}$ or faster, then the mixing matrix is only identifiable up to transpose products. These results also have implications for causal inference from linear structural equation models with near-Gaussian additive noise. Comment: 22 pages, 2 figures.
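
    The contamination setup can be illustrated with a small simulation (a sketch under assumed choices: Laplace contamination and a hypothetical 2x2 mixing matrix, neither taken from the paper). When the contamination fraction shrinks at the $1/\sqrt{n}$ rate, the non-Gaussianity of the sources, measured here by excess kurtosis, becomes statistically negligible, which illustrates why identifiability of $A$ degrades in that regime.

```python
import numpy as np

rng = np.random.default_rng(1)

def contaminated_gaussian(n, delta, rng):
    """(1 - delta) * N(0, 1) + delta * Laplace(0, 1) mixture (assumed form)."""
    z = rng.normal(size=n)
    lap = rng.laplace(size=n)
    return np.where(rng.random(n) < delta, lap, z)

n = 100_000
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # hypothetical mixing matrix

kurt = {}
for label, delta in [("1/sqrt(n)", 1.0 / np.sqrt(n)), ("fixed", 0.2)]:
    eps = np.vstack([contaminated_gaussian(n, delta, rng) for _ in range(2)])
    X = A @ eps                           # ICA model X = A * eps
    # Excess kurtosis of the sources: zero for exact Gaussians, and the
    # kind of non-Gaussianity ICA-type methods exploit to pin down A.
    k = np.mean(eps**4, axis=1) / np.mean(eps**2, axis=1) ** 2 - 3.0
    kurt[label] = k.mean()
print(kurt)
```

    With fixed contamination the sources have clearly non-zero excess kurtosis; at the $1/\sqrt{n}$ rate the sample kurtosis is of the same order as its own estimation noise.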

    Variable selection in high-dimensional linear models: partially faithful distributions and the PC-simple algorithm

    We consider variable selection in high-dimensional linear models where the number of covariates greatly exceeds the sample size. We introduce the new concept of partial faithfulness and use it to infer associations between the covariates and the response. Under partial faithfulness, we develop a simplified version of the PC algorithm (Spirtes et al., 2000), the PC-simple algorithm, which is computationally feasible even with thousands of covariates and provides consistent variable selection under conditions on the random design matrix that are of a different nature than coherence conditions for penalty-based approaches like the Lasso. Simulations and application to real data show that our method is competitive compared to penalty-based approaches. We provide an efficient implementation of the algorithm in the R package pcalg. Comment: 20 pages, 3 figures.
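
    The screening idea behind PC-simple can be sketched as follows (a simplified Python illustration, not the pcalg implementation; the Fisher z-test and the 0.05-level critical value are standard choices assumed here): starting from all covariates, repeatedly remove any covariate whose partial correlation with the response, given some small subset of the remaining covariates, is not significantly nonzero, increasing the conditioning-set size each round.

```python
import numpy as np
from itertools import combinations
from math import atanh, sqrt

def parcorr(C, i, j, S):
    """Partial correlation of variables i and j given the set S, read off
    the inverse of the corresponding submatrix of the correlation matrix C."""
    idx = [i, j] + list(S)
    P = np.linalg.inv(C[np.ix_(idx, idx)])
    return -P[0, 1] / sqrt(P[0, 0] * P[1, 1])

def pc_simple(X, y, z_crit=1.96):
    """Simplified PC-simple screening: drop covariate j as soon as some
    size-k conditioning set S from the active set makes the Fisher
    z-statistic of parcorr(X_j, y | S) insignificant; grow k until no
    conditioning sets of that size remain."""
    n, p = X.shape
    C = np.corrcoef(np.column_stack([X, y]), rowvar=False)
    active = set(range(p))
    k = 0
    while len(active) > k:
        for j in sorted(active):
            for S in combinations(sorted(active - {j}), k):
                z = sqrt(n - k - 3) * atanh(parcorr(C, j, p, S))
                if abs(z) <= z_crit:      # cannot reject zero partial corr
                    active.discard(j)
                    break
        k += 1
    return sorted(active)

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)
print(pc_simple(X, y))
```

    The production implementation, with the consistency guarantees described above, is the one in the R package pcalg.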

    Estimating the effect of joint interventions from observational data in sparse high-dimensional settings

    We consider the estimation of joint causal effects from observational data. In particular, we propose new methods to estimate the effect of multiple simultaneous interventions (e.g., multiple gene knockouts), under the assumption that the observational data come from an unknown linear structural equation model with independent errors. We derive asymptotic variances of our estimators when the underlying causal structure is partly known, as well as high-dimensional consistency when the causal structure is fully unknown and the joint distribution is multivariate Gaussian. We also propose a generalization of our methodology to the class of nonparanormal distributions. We evaluate the estimators in simulation studies and also illustrate them on data from the DREAM4 challenge. Comment: 30 pages, 3 figures, 45 pages supplement.
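
    When the causal structure is known, the effect of a joint intervention in a linear SEM can be read off a single adjusted regression. The following is a minimal sketch with an assumed toy SEM (one common parent of the two intervened variables), not the paper's high-dimensional procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Hypothetical linear SEM with independent errors:
#   P -> X1,  P -> X2,  and (X1, X2, P) -> Y.
P = rng.normal(size=n)
X1 = 0.8 * P + rng.normal(size=n)
X2 = -0.5 * P + rng.normal(size=n)
Y = 1.0 * X1 + 2.0 * X2 + 0.7 * P + rng.normal(size=n)

# Joint intervention do(X1 = x1, X2 = x2): adjusting for the parents of
# the intervened variables (here just P) makes the regression coefficients
# of X1 and X2 equal their total joint-intervention effects (1.0 and 2.0
# in this toy SEM).
Z = np.column_stack([X1, X2, P, np.ones(n)])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(beta[:2])
```

    The hard part addressed in the paper is doing this when the structure is unknown, where only a Markov equivalence class, and hence a set of possible parent adjustments, can be estimated from data.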

    Current status data with competing risks: Consistency and rates of convergence of the MLE

    We study nonparametric estimation of the sub-distribution functions for current status data with competing risks. Our main interest is in the nonparametric maximum likelihood estimator (MLE), and for comparison we also consider a simpler "naive estimator." Both types of estimators were studied by Jewell, van der Laan and Henneman [Biometrika (2003) 90 183--197], but little was known about their large sample properties. We have started to fill this gap, by proving that the estimators are consistent and converge globally and locally at rate $n^{1/3}$. We also show that this local rate of convergence is optimal in a minimax sense. The proof of the local rate of convergence of the MLE uses new methods, and relies on a rate result for the sum of the MLEs of the sub-distribution functions which holds uniformly on a fixed neighborhood of a point. Our results are used in Groeneboom, Maathuis and Wellner [Ann. Statist. (2008) 36 1064--1089] to obtain the local limiting distributions of the estimators. Comment: Published at http://dx.doi.org/10.1214/009053607000000974 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
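
    The naive estimator mentioned above is, for each failure cause, an isotonic regression of the current-status indicators on the observation times. A minimal sketch, with an assumed exponential failure time and uniform inspection time (the distributions are illustrative, not from the paper), using the pool-adjacent-violators algorithm:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit to y."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v))
        wts.append(1.0)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            vals[-2:] = [(wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w]
            wts[-2:] = [w]
    return np.repeat(vals, np.array(wts, dtype=int))

rng = np.random.default_rng(4)
n = 3000
E = rng.exponential(size=n)                    # latent failure time
K = rng.choice([1, 2], size=n, p=[0.6, 0.4])   # latent failure cause
T = rng.uniform(0, 3, size=n)                  # inspection (observation) time
d1 = ((E <= T) & (K == 1)).astype(float)       # current status for cause 1

# Naive estimator of the sub-distribution F_1(t) = P(E <= t, K = 1):
# isotonic regression of the cause-1 indicators against the sorted times.
order = np.argsort(T)
F1_hat = pava(d1[order])

# Compare with the truth F_1(t) = 0.6 * (1 - exp(-t)) at t = 1.
t0 = np.searchsorted(T[order], 1.0)
print(F1_hat[t0], 0.6 * (1 - np.exp(-1)))
```

    The MLE additionally enforces that the sub-distribution functions are jointly compatible, which is what makes its analysis, and the uniform rate result described above, harder.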